
    A Graph Transformation Approach for Modeling and Verification of UML 2.0 Sequence Diagrams

    Unified Modeling Language (UML) 2.0 Sequence Diagrams (UML 2.0 SD) are used to describe interactions in software systems. These diagrams must be verified in the early stages of the software development process to guarantee the production of a reliable system. However, like all UML specifications, UML 2.0 SD lack formal semantics, which makes their verification difficult, especially when modeling a critical system where automated verification is necessary. Communicating Sequential Processes (CSP) is a formal specification language that is well suited for analysis and has many automatic verification tools. UML and CSP thus have complementary strengths: modeling and analysis, respectively. Recently, a formalization of UML 2.0 SD using CSP has been proposed in the literature; however, no automation of that formalization exists. In this paper, we propose an approach, based on the above formalization and a visual modeling tool, to model UML 2.0 SD and automatically transform them into CSP specifications, so that an existing CSP model checker can verify them. The approach uses UML 2.0 SD for modeling, and CSP and its tools for verification. It is based on graph transformation, uses the AToM3 tool, and proposes a metamodel of UML 2.0 SD together with a graph grammar that maps them into CSP. Failures-Divergences Refinement (FDR) is the model checker used to verify behavioral properties of the source model, such as deadlock, livelock, and determinism. The proposed approach and tool are illustrated through a case study
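    The core of such a transformation is rule-based: each element of the sequence-diagram metamodel is rewritten into a CSP construct. As a rough illustration only (not the paper's actual AToM3 graph grammar), a minimal Python sketch that flattens an ordered list of messages between lifelines into a prefix-style CSP process term:

    ```python
    # Hypothetical sketch: flattening a toy sequence diagram into a CSP-like
    # process term. The real approach uses AToM3 graph-grammar rules; the
    # function and event-naming scheme here are made up for illustration.

    def sd_to_csp(process_name, messages):
        """messages: ordered (sender, receiver, label) triples of one interaction."""
        # Each message becomes a CSP event 'sender_receiver_label';
        # the ordered trace becomes an event-prefix chain ending in SKIP.
        events = [f"{s}_{r}_{m}" for (s, r, m) in messages]
        body = " -> ".join(events + ["SKIP"])
        return f"{process_name} = {body}"

    spec = sd_to_csp("ATM", [("user", "atm", "insertCard"),
                             ("atm", "bank", "verifyPIN"),
                             ("bank", "atm", "ok")])
    print(spec)
    # ATM = user_atm_insertCard -> atm_bank_verifyPIN -> bank_atm_ok -> SKIP
    ```

    A process term in this shape is what a checker such as FDR could then analyse for deadlock, livelock, and determinism.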

    A multi-agent approach for range image segmentation

    In this article, we present and evaluate a multi-agent approach for the segmentation of range images. The approach consists in using a population of autonomous agents to segment a range image into its different planar regions. The agents adapt to the regions on which they move, then perform cooperative and competitive actions that produce a collective segmentation of the image. An artificial potential field is introduced to coordinate the agents' movements and allow them to organize themselves around pixels of interest. Experimental results obtained on real images show the potential of the proposed approach for range image analysis, with respect to both segmentation efficiency and result reliability
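    The attraction mechanism can be pictured as a scalar field that pulls agents toward pixels of interest, with each agent greedily descending the field. A minimal sketch under made-up assumptions (inverse-distance potential, 4-neighbour moves; the paper's agents also smooth and alter pixels, which is omitted here):

    ```python
    # Hypothetical sketch of the artificial-potential-field idea: pixels of
    # interest attract nearby agents, which step down the potential gradient.
    # The potential shape and step rule are illustrative, not the paper's.

    def potential(pos, sources, strength=10.0):
        """Sum of inverse-distance attractive potentials (lower = more attractive)."""
        x, y = pos
        return -sum(strength / (1.0 + abs(x - sx) + abs(y - sy))
                    for sx, sy in sources)

    def step(pos, sources):
        """Move the agent to the 4-neighbour with the lowest potential."""
        x, y = pos
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return min(neighbours, key=lambda p: potential(p, sources))

    agent = (0, 0)
    edges = [(5, 3)]          # pixels of interest, e.g. a contested edge pixel
    for _ in range(8):
        agent = step(agent, edges)
    print(agent)              # the agent has been drawn to (5, 3)
    ```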

    A multi-agent approach for the segmentation of range images of polyhedral objects. A new image segmentation approach

    In this paper, we present and evaluate a multi-agent approach for the segmentation of range images containing polyhedral objects. The approach consists in using a population of autonomous agents to segment a range image into its planar regions. The agents adapt to the regions on which they move, then perform cooperative and competitive actions allowing a collective segmentation of the image. The experimental results show the potential of the proposed approach for range image analysis, especially regarding segmentation reliability

    Cross-Modal Learning for Audio-Visual Emotion Recognition in Acted Speech

    Human affect and automatic emotion detection have been active research topics for several years, with outcomes that have benefited a number of applications, including human-machine interaction, health care, and e-education. Humans perceive and express emotions in a multi-modal manner; to capture this multimodal emotional content, robust features need to be extracted and combined efficiently. In this paper, we propose a cross-modal deep learning framework that leverages audio and visual information (speech and facial expressions) for emotion recognition in acted speech. The proposed method learns the spatio-temporal information of facial expressions and the audio features in an end-to-end fashion. The experiments were conducted on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). The proposed method achieves a classification accuracy of 87.5%, outperforming state-of-the-art works on RAVDESS
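    The combination step can be illustrated with the simplest possible fusion scheme. The paper trains an end-to-end deep network; as a toy stand-in, the sketch below concatenates per-modality feature vectors (early fusion) and scores them with a fixed linear layer, just to show where the two modalities meet. All feature values, weights, and labels are made up:

    ```python
    # Hypothetical sketch of feature-level (early) fusion for audio-visual
    # emotion recognition. Not the paper's architecture: real systems learn
    # the features and fusion weights jointly from data.

    def fuse(audio_feats, visual_feats):
        """Early fusion: concatenate modality features into one vector."""
        return audio_feats + visual_feats

    def classify(fused, weights, labels):
        """Score each emotion with a dot product and pick the argmax."""
        scores = {lab: sum(w * x for w, x in zip(ws, fused))
                  for lab, ws in zip(labels, weights)}
        return max(scores, key=scores.get)

    audio = [0.9, 0.1]            # e.g. a pitch/energy summary (made up)
    visual = [0.2, 0.8]           # e.g. a facial-landmark summary (made up)
    labels = ["happy", "sad"]
    weights = [[1, 0, 0, 1],      # toy weights, one row per emotion
               [0, 1, 1, 0]]
    print(classify(fuse(audio, visual), weights, labels))  # happy
    ```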


    A Multi-agent Approach for Range Image Segmentation

    We present and evaluate in this paper a multi-agent approach for range image segmentation. The approach consists in using autonomous agents for the segmentation of a range image into its different planar regions. Agents move over the image and perform local actions on the pixels, allowing robust region extraction and edge detection. In order to deal with inaccuracies in the segmentation results due to the distributed and competitive actions performed by the autonomous agents, a Bayesian edge regularization is applied to the resulting edges. A new Markov Random Field (MRF) model is introduced to model edge smoothness, used as a prior in the edge regularization. The experimental results obtained with real images from the ABW database are compared to those of some typical methods for range image segmentation. The comparison shows a good potential of the proposed approach for scene understanding in range images, regarding both segmentation efficiency and detection accuracy
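    A smoothness prior of this kind penalizes label disagreements between neighbouring pixels, so that regularization favours smooth, connected edges over ragged ones. As a rough illustration in the spirit of a Potts-style MRF energy (the paper's actual edge-smoothness model is more elaborate, and the grids below are invented):

    ```python
    # Hypothetical sketch of an MRF smoothness energy on an edge-label map:
    # every 4-connected pair of pixels with differing labels pays a penalty
    # beta, so lower energy means a smoother labelling.

    def potts_energy(labels, beta=1.0):
        """Sum a penalty beta over every 4-connected pair with differing labels."""
        h, w = len(labels), len(labels[0])
        energy = 0.0
        for i in range(h):
            for j in range(w):
                if i + 1 < h and labels[i][j] != labels[i + 1][j]:
                    energy += beta
                if j + 1 < w and labels[i][j] != labels[i][j + 1]:
                    energy += beta
        return energy

    ragged = [[0, 1, 0],
              [1, 0, 1]]      # noisy, zig-zag edge labels
    smooth = [[0, 0, 0],
              [1, 1, 1]]      # clean horizontal edge
    print(potts_energy(ragged), potts_energy(smooth))  # 7.0 3.0
    ```

    Used as a prior, this energy biases the regularization toward the smooth labelling, which is the intuition behind the Bayesian edge regularization described above.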

    An Agent-Based Approach for Range Image Segmentation

    In this paper an agent-based segmentation approach is presented and evaluated. The approach consists in using a high number of autonomous agents for the segmentation of a range image into its different planar regions. The moving agents perform cooperative and competitive actions on the image pixels, allowing a robust extraction of regions and an accurate edge detection. An artificial potential field, created around pixels of interest, allows the agents to gather around edges and noise regions. The results obtained with real images are compared to those of some typical methods for range image segmentation. The comparison shows the potential of the proposed approach for scene understanding in range images, regarding both segmentation efficiency and detection accuracy

    A multi-agent interaction model based on the potential field, applied to range image segmentation

    In this article, we present a potential-field-based multi-agent system for edge detection in range images. A population of situated agents is launched to explore the image. While moving, each agent smooths the image along its path and alters the pixels that do not belong to the surface on which it moves. On the border between two adjacent surfaces, two groups of agents compete to include the border pixels in their respective surfaces; this competition preserves the edges from being erased. Noise regions, however, are erased by the successive smoothing performed by agents moving on the same surface. A potential field, created around altered pixels, allows the agents in the neighbourhood of these pixels to gather and concentrate their actions around them. After several alterations of the same pixel, the potential field around that pixel is relaxed, allowing the agents under its influence to be released and to explore other regions of the image
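    The relax-after-repeated-alteration rule above amounts to a per-pixel counter that switches the local field off once the pixel has been contested often enough. A minimal sketch of just that mechanism, with an invented class, threshold, and field representation:

    ```python
    # Hypothetical sketch of the relaxation rule: each altered pixel emits a
    # potential field that attracts agents; once the pixel has been altered
    # enough times, its field is relaxed (cleared) and nearby agents are
    # freed to explore elsewhere. Names and the threshold are made up.

    RELAX_AFTER = 3               # alterations before the field is relaxed

    class PixelField:
        def __init__(self):
            self.alterations = {} # pixel -> how many times agents altered it
            self.active = set()   # pixels currently emitting a potential field

        def alter(self, pixel):
            n = self.alterations.get(pixel, 0) + 1
            self.alterations[pixel] = n
            if n >= RELAX_AFTER:
                self.active.discard(pixel)   # relax: release nearby agents
            else:
                self.active.add(pixel)       # keep attracting agents

    field = PixelField()
    for _ in range(2):
        field.alter((5, 3))
    print((5, 3) in field.active)   # True: still attracting agents
    field.alter((5, 3))
    print((5, 3) in field.active)   # False: field relaxed after 3 alterations
    ```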